

Reviews: Group Retention when Using Machine Learning in Sequential Decision Making: the Interplay between User Dynamics and Fairness

Neural Information Processing Systems

Originality: To the best of my knowledge, the model of general user retention dynamics and the corresponding statements evidencing negative feedback loops are novel contributions to the literature on sequential fairness. The contributions of the paper would be clearer if citations were provided for methods and models introduced in earlier works (for example, I suggest adding citations for the fairness criteria in lines 149-158, for the user departure models in lines 197-208, and for the statement in lines 173-174, if applicable). Since the full related work is deferred to the appendix, I see little value in citing [2, 3, 7, 10, 15, 16] without distinguishing between them. More context on what these works do and how they relate to your work would help readers contextualize your contributions; please expand the discussion of these papers. Quality: The simple and unifying model of sequential decision making presented here is, in my opinion, very valuable.


Reviews: Group Retention when Using Machine Learning in Sequential Decision Making: the Interplay between User Dynamics and Fairness

Neural Information Processing Systems

There was some disagreement among the reviewers. In the subsequent discussion, the positive points were judged to outweigh the negative ones, and I am happy to support acceptance. The topic is important, the approach is reasonable, and the empirical contribution, despite some caveats, is convincing. That said, I ask the authors to incorporate the reviewers' suggestions in the final version of their paper.


Group Retention when Using Machine Learning in Sequential Decision Making: the Interplay between User Dynamics and Fairness

Neural Information Processing Systems

Machine Learning (ML) models trained on data from multiple demographic groups can inherit the representation disparity (Hashimoto et al., 2018) that may exist in the data: the model may be less favorable to groups that contribute less to the training process; this in turn can degrade population retention in those groups over time and exacerbate representation disparity in the long run. In this study, we seek to understand the interplay between ML decisions and the underlying group representation, how they evolve in a sequential framework, and what role the use of fairness criteria plays in this process. We show that representation disparity can easily worsen over time under a natural user dynamics (arrival and departure) model when decisions are made according to a commonly used objective and common fairness criteria, resulting in some groups disappearing entirely from the sample pool in the long run. This highlights the fact that fairness criteria must be defined with the impact of decisions on user dynamics taken into account. Toward this end, we explain how a proper fairness criterion can be selected based on a general user dynamics model.
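
To make the described feedback loop concrete, below is a minimal simulation sketch of this kind of dynamic, not the paper's exact model: two groups share a single decision chosen to minimize population-weighted loss, and users depart with a probability that grows with the loss their group experiences. The group optima (theta), initial sizes, arrival rates, and departure sensitivity are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a retention feedback loop (illustrative, not the
# paper's model). Two groups share one decision chosen to minimize
# population-weighted squared loss; users depart with a probability
# that grows with the loss their group experiences.

theta = np.array([0.0, 1.0])       # hypothetical group-specific optima
n = np.array([800.0, 200.0])       # initial group sizes (majority, minority)
arrivals = np.array([40.0, 10.0])  # fixed number of new users per step

def departure_prob(loss, base=0.05, sensitivity=0.4):
    """Probability that a user leaves, increasing with experienced loss."""
    return np.clip(base + sensitivity * loss, 0.0, 1.0)

for t in range(31):
    weights = n / n.sum()
    # Single shared decision minimizing population-weighted squared loss:
    # the representation-weighted mean of the group optima.
    decision = weights @ theta
    group_loss = (decision - theta) ** 2
    # Expected-value update: retained users plus fixed arrivals per group.
    n = n * (1.0 - departure_prob(group_loss)) + arrivals
    if t % 10 == 0:
        print(f"t={t:2d}  sizes={np.round(n, 1)}  minority share={n[1] / n.sum():.3f}")
```

Running this sketch, the shared decision drifts toward the majority optimum, the minority group incurs higher loss and departs at a higher rate, and its share of the population shrinks over time; in this toy setting fixed arrivals keep the group from vanishing completely, whereas under other dynamics (as the abstract notes) a group can disappear from the sample pool entirely.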